Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization
Author
Abstract
We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes to finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of Õ((n + L/μ) ln(1/ε)) for L-smooth and μ-strongly convex individual functions: one must also know which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sum algorithms (including, e.g., SDCA, SVRG, SAG), it is not possible to get an 'accelerated' complexity bound of Õ((n + √(nL/μ)) ln(1/ε)), unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing L-smooth and convex finite sums, the optimal complexity bound is Õ(n + L/ε), assuming that (on average) the same update rule is used in every iteration, and Õ(n + √(nL/ε)), otherwise.
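To make the role of the oracle's index concrete, here is a minimal NumPy sketch of SVRG (one of the variance-reduction methods named above) on illustrative quadratic losses f_i(x) = ½(aᵢᵀx − bᵢ)²; the function names, step size, and synthetic data are assumptions for illustration, not the paper's construction. Note how the variance-reduced step evaluates the sampled component at both the current point and the snapshot, which is only possible when the oracle reveals which individual function i is being queried.

import numpy as np

# Minimal SVRG sketch for a finite sum f(x) = (1/n) sum_i f_i(x), with
# illustrative quadratic losses f_i(x) = 0.5 * (a_i @ x - b_i)**2.
# The update needs the *index* i of the sampled component, because the
# component gradient is evaluated at both x and the snapshot x_ref.

def svrg(A, b, lr=0.01, epochs=20, inner_steps=None):
    n, d = A.shape
    inner_steps = inner_steps or n
    x = np.zeros(d)

    def grad_i(x, i):
        # gradient of the i-th component at x
        return (A[i] @ x - b[i]) * A[i]

    for _ in range(epochs):
        x_ref = x.copy()
        full_grad = (A.T @ (A @ x_ref - b)) / n   # snapshot (full) gradient
        for _ in range(inner_steps):
            i = np.random.randint(n)
            # variance-reduced gradient estimate; unbiased for grad f(x)
            g = grad_i(x, i) - grad_i(x_ref, i) + full_grad
            x -= lr * g
    return x

# Usage on a small synthetic least-squares problem
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = A @ np.ones(5) + 0.01 * rng.standard_normal(100)
x_hat = svrg(A, b)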
Similar Resources
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sums Optimization
We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes to finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of Õ((n + L/μ) ln(1/ε)) for L-smooth and μ-strongly convex individual functions: one must also know which individual fu...
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure
Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. However, in the context of empirical risk minimization, it is often helpful to augment the training set by considering random perturbations of input examples. In this case, the objective is no longer a finite sum, and the main candidate for optimization is the stochas...
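As a sketch of the contrast this snippet draws, the assumed Python example below runs plain SGD where each step draws a fresh random perturbation of the sampled input, so the effective objective is an expectation over perturbations rather than a finite sum; the perturbation scale sigma, the squared loss, and all names are illustrative assumptions.

import numpy as np

# Plain SGD with data augmentation: each step sees a freshly perturbed
# example, so the objective E_rho[(1/n) sum_i f_i(x; rho)] is no longer a
# finite sum, and snapshot-based variance reduction does not apply directly.

def sgd_with_perturbations(A, b, sigma=0.1, lr=0.01, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        a = A[i] + sigma * rng.standard_normal(d)  # fresh perturbed example
        x -= lr * (a @ x - b[i]) * a               # stochastic gradient step
    return x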
Parallel Asynchronous Stochastic Variance Reduction for Nonconvex Optimization
Nowadays, asynchronous parallel algorithms have received much attention in the optimization field due to the crucial demands of modern large-scale optimization problems. However, most asynchronous algorithms focus on convex problems; analysis of nonconvex problems is lacking. For the Asynchronous Stochastic Gradient Descent (ASGD) algorithm, the best result from (Lian et al., 2015) can only achieve an ...
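For a sense of what "asynchronous" means here, the following is a Hogwild-style sketch under assumed names and a toy quadratic objective: workers update a shared parameter vector without locks, so each gradient may be computed at a stale iterate. With CPython's GIL this only illustrates the access pattern, not true parallel speedup.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hogwild-style asynchronous SGD sketch: several workers read and write a
# shared parameter vector without synchronization, so updates can be stale.

def async_sgd(A, b, workers=4, steps_per_worker=500, lr=0.01):
    n, d = A.shape
    x = np.zeros(d)  # shared state, updated without locks

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(steps_per_worker):
            i = rng.integers(n)
            g = (A[i] @ x - b[i]) * A[i]  # gradient at a possibly stale x
            x[:] -= lr * g                # lock-free in-place update

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(worker, range(workers)))  # wait and surface errors
    return x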
Updating finite element model using frequency domain decomposition method and bees algorithm
The following study deals with updating the finite element model of structures using operational modal analysis. The updating process uses an evolutionary optimization algorithm, namely the bees algorithm, which applies the instinctive behavior of honeybees in finding food sources. To determine the uncertain updated parameters, such as the geometry and material properties of the structure, local and...
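For readers unfamiliar with it, here is a compact, assumed sketch of the bees algorithm on a toy parameter-recovery objective standing in for a frequency residual; the site counts, neighbourhood radius, and objective are illustrative choices, not the study's settings.

import numpy as np

# Bees algorithm sketch: elite sites get local (neighbourhood) search by
# recruited bees, while the remaining scouts keep exploring globally.

def bees_algorithm(f, bounds, n_scouts=20, n_elite=3, n_recruits=5,
                   radius=0.1, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    d = lo.shape[0]
    scouts = rng.uniform(lo, hi, size=(n_scouts, d))
    for _ in range(iters):
        # rank sites by objective value (lower is better)
        scouts = scouts[np.argsort([f(s) for s in scouts])]
        for k in range(n_elite):  # local search around the best sites
            patch = scouts[k] + radius * (hi - lo) * \
                rng.standard_normal((n_recruits, d))
            patch = np.clip(patch, lo, hi)
            best = min(patch, key=f)
            if f(best) < f(scouts[k]):
                scouts[k] = best
        # remaining scouts abandon their sites and explore globally
        scouts[n_elite:] = rng.uniform(lo, hi, size=(n_scouts - n_elite, d))
    return scouts[0]

# Toy usage: recover two "stiffness" parameters from a residual objective
target = np.array([0.3, 0.7])
f = lambda p: np.sum((p - target) ** 2)
p_hat = bees_algorithm(f, (np.zeros(2), np.ones(2)))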
Effect of Objective Function on the Optimization of Highway Vertical Alignment by Means of Metaheuristic Algorithms
The main purpose of this work is the comparison of several objective functions for optimization of the vertical alignment. To this end, after formulating the optimal vertical alignment problem under different constraints, the objective function was considered in four forms, including: 1) the sum of the absolute value of the variance between the vertical alignment and the existing ground; 2) the su...
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
برای دانلود متن کامل این مقاله و بیش از 32 میلیون مقاله دیگر ابتدا ثبت نام کنید
ثبت ناماگر عضو سایت هستید لطفا وارد حساب کاربری خود شوید
Journal: CoRR
Volume: abs/1706.01686
Publication date: 2017